Results 1 - 20 of 22
1.
Journal of Korean Medical Science ; : e142-2023.
Article in English | WPRIM | ID: wpr-976970

ABSTRACT

Background: Heart rate variability (HRV) extracted from an electrocardiogram measured over a short period in a resting state is used clinically as a bio-signal reflecting the emotional state. However, as interest in wearable devices increases, greater attention is being paid to HRV extracted from long-term electrocardiograms, which may contain additional clinical information. The purpose of this study was to examine the characteristics of HRV parameters extracted from long-term electrocardiograms and to explore the differences between participants with and without depressive and anxiety symptoms. Methods: Long-term electrocardiograms were acquired from 354 adults with no psychiatric history who underwent Holter monitoring. Evening and nighttime HRV and the ratio of nighttime-to-evening HRV were compared between 127 participants with depressive symptoms and 227 participants without depressive symptoms. Comparisons were also made between participants with and without anxiety symptoms. Results: Absolute values of HRV parameters did not differ between groups based on the presence of depressive or anxiety symptoms. Overall, HRV parameters increased at nighttime compared with evening. Participants with depressive symptoms showed a significantly higher nighttime-to-evening ratio of high-frequency HRV than participants without depressive symptoms. The nighttime-to-evening ratio of HRV parameters did not differ significantly according to the presence of anxiety symptoms. Conclusion: HRV extracted from long-term electrocardiograms showed a circadian rhythm. Depression may be associated with changes in the circadian rhythm of parasympathetic tone.
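The frequency-domain HRV measures discussed above (e.g., high-frequency power) are typically derived from the RR-interval series via a power spectral density estimate, and the nighttime-to-evening ratio is then a simple quotient of the two period-specific values. A minimal sketch under those assumptions, using the conventional 0.15-0.4 Hz HF band; the resampling rate, segment lengths, and synthetic RR data are illustrative, not taken from the study:

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import trapezoid
from scipy.signal import welch

def hf_power(rr_ms, fs=4.0, band=(0.15, 0.40)):
    """High-frequency HRV power (ms^2) from successive RR intervals in milliseconds."""
    t = np.cumsum(rr_ms) / 1000.0                  # beat times in seconds
    t_even = np.arange(t[0], t[-1], 1.0 / fs)      # evenly resampled time grid
    rr_even = interp1d(t, rr_ms, kind="cubic")(t_even)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(rr_even)))
    mask = (f >= band[0]) & (f < band[1])
    return trapezoid(pxx[mask], f[mask])           # integrate PSD over the HF band

# Synthetic evening and nighttime RR segments (illustration only)
rng = np.random.default_rng(0)
rr_evening_ms = 800 + 50 * rng.standard_normal(600)
rr_night_ms = 900 + 60 * rng.standard_normal(600)
print(f"nighttime-to-evening HF ratio: {hf_power(rr_night_ms) / hf_power(rr_evening_ms):.2f}")
```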

2.
Yonsei Medical Journal ; : 108-111, 2022.
Article in English | WPRIM | ID: wpr-919620

ABSTRACT

While digital health solutions have shown good outcomes in numerous studies, the adoption of digital health solutions in clinical practice faces numerous challenges. To prepare for widespread adoption of digital health, stakeholders in digital health will need to establish an objective evaluation process, consider uncertainty through critical evaluation, be aware of inequity, and consider patient engagement. By “making friends” with digital health, health care can be improved for patients.

3.
Yonsei Medical Journal ; : 692-700, 2022.
Article in English | WPRIM | ID: wpr-939384

ABSTRACT

Purpose: Fetal well-being is usually assessed via fetal heart rate (FHR) monitoring during the antepartum period. However, the interpretation of FHR is a complex and subjective process with low reliability. This study developed a machine learning model that can classify fetal cardiotocography results as normal or abnormal. Materials and Methods: In total, 17,492 fetal cardiotocography results were obtained from Ajou University Hospital and 100 fetal cardiotocography results from Czech Technical University and University Hospital in Brno. Board-certified physicians then reviewed the fetal cardiotocography results and labeled 1,456 of them as the gold standard; these results were used to train and validate the model. The remaining results were used to validate the clinical effectiveness of the model against actual outcomes. Results: Our model achieved an area under the receiver operating characteristic curve (AUROC) of 0.89 and an area under the precision-recall curve (AUPRC) of 0.73 in the internal validation dataset. An average AUROC of 0.73 and an average AUPRC of 0.40 were achieved in the external validation dataset. The fetus abnormality score, calculated from the continuous fetal cardiotocography results, was significantly associated with actual clinical outcomes [intrauterine growth restriction: odds ratio, 3.626 (p=0.031); Apgar score at 1 min: odds ratio, 9.523 (p<0.001); Apgar score at 5 min: odds ratio, 11.49 (p=0.001); and fetal distress: odds ratio, 23.09 (p<0.001)]. Conclusion: The machine learning model developed in this study showed good performance in classifying FHR signals, suggesting that it can be applied to medical devices as a screening tool for monitoring fetal status.
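The AUROC and AUPRC figures reported above can be computed directly from predicted probabilities and gold-standard labels. A minimal scikit-learn sketch; the classifier and synthetic features here are placeholders standing in for the study's cardiotocography-derived inputs:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.model_selection import train_test_split

# Placeholder tabular features standing in for CTG-derived inputs (imbalanced classes)
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]                    # probability of the "abnormal" class

print("AUROC:", roc_auc_score(y_te, prob))
print("AUPRC:", average_precision_score(y_te, prob))    # area under the precision-recall curve
```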

4.
Psychiatry Investigation ; : 100-109, 2022.
Article in English | WPRIM | ID: wpr-926904

ABSTRACT

Objective: We aimed to present the study design and baseline cross-sectional participant characteristics of the Biobank Innovations for Chronic Cerebrovascular Disease With Alzheimer's Disease Study (BICWALZS). Methods: A total of 1,013 participants were enrolled in BICWALZS from October 2016 to December 2020. All participants underwent clinical assessments, basic blood tests, and standardized neuropsychological tests (n=1,013). We performed brain magnetic resonance imaging (MRI, n=817), brain amyloid positron emission tomography (PET, n=713), single nucleotide polymorphism microarray chip genotyping (K-Chip, n=949), locomotor activity assessment (actigraphy, n=200), and patient-derived dermal fibroblast sampling (n=175) in subsets of participants. Results: The mean age was 72.8 years, and 658 participants (65.0%) were female. Based on clinical assessments, a total of 168, 534, 211, 80, and 20 participants had subjective cognitive decline, mild cognitive impairment (MCI), Alzheimer's dementia, vascular dementia, and other types of dementia or not otherwise specified, respectively. Based on neuroimaging biomarkers and cognition, 199, 159, 78, and 204 participants were classified as cognitively normal (CN), Alzheimer's disease (AD)-related cognitive impairment, vascular cognitive impairment, and not otherwise specified due to mixed pathology (NOS), respectively. Each group exhibited many differences in various clinical, neuropsychological, and neuroimaging results at baseline. Baseline characteristics of BICWALZS participants in the MCI, AD, and vascular dementia groups were generally acceptable and consistent with those of 26 worldwide dementia cohorts and another independent AD cohort in Korea. Conclusion: BICWALZS is a prospective, longitudinal study assessing various clinical and biomarker characteristics in older adults with cognitive complaints. Details of the recruitment process, methodology, and baseline assessment results are described in this paper.

5.
Endocrinology and Metabolism ; : 65-73, 2022.
Article in English | WPRIM | ID: wpr-924970

ABSTRACT

Background: Most studies of systematic drug repositioning have used drug-oriented data such as chemical structures, gene expression patterns, and adverse effect profiles. As it is often difficult to prove the effectiveness of repositioning candidates in real-world clinical settings, we used patient-centered real-world data to screen repositioning candidate drugs for multiple diseases simultaneously, especially for diabetic complications. Methods: Using the National Health Insurance Service-National Sample Cohort (2002 to 2013), we analyzed claims data of 43,048 patients with type 2 diabetes mellitus (age ≥40 years). To find repositioning candidate disease-drug pairs, a nested case-control study was used for 29 pairs of diabetic complications and the drugs that met our criteria. To validate this study design, we conducted an external validation for a selected candidate pair using electronic health records. Results: We found 24 repositioning candidate disease-drug pairs. In the external validation study for the candidate pair of cerebral infarction and glycopyrrolate, glycopyrrolate was associated with a decreased risk of cerebral infarction (hazard ratio, 0.10; 95% confidence interval, 0.02 to 0.44). Conclusion: To reduce the risk of diabetic complications, it would be possible to consider these candidate drugs instead of other drugs with the same indications. Moreover, this methodology could be applied to diseases other than diabetes to discover repositioning candidates, offering a new approach to drug repositioning.
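The drug-outcome screening described above rests on case-control effect estimates. A minimal sketch of an unadjusted odds ratio with a 95% confidence interval from a 2x2 exposure table; the counts are invented for illustration, and the study itself used conditional logistic regression on matched sets rather than this crude calculation:

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and 95% CI from a 2x2 table.

    a: exposed cases, b: unexposed cases, c: exposed controls, d: unexposed controls.
    """
    or_ = (a * d) / (b * c)
    se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)      # Woolf's method on the log scale
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se_log_or)
    return or_, lo, hi

# Illustrative counts only (not study data)
print(odds_ratio_ci(a=12, b=188, c=95, d=705))
```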

6.
Journal of Korean Medical Science ; : e198-2021.
Article in English | WPRIM | ID: wpr-899872

ABSTRACT

Background: Vaccine safety surveillance is important because it is related to vaccine hesitancy, which affects the vaccination rate. To increase confidence in vaccination, active monitoring of vaccine adverse events is important. For effective active surveillance, we developed and verified a machine learning-based active surveillance system using national claims data. Methods: We used two databases: one from the Korea Disease Control and Prevention Agency, which contains influenza vaccination records for the elderly, and another from the National Health Insurance Service, which contains the claims data of vaccinated people. We developed a case-crossover design-based machine learning model to predict the health outcomes of interest (anaphylaxis and agranulocytosis) using a random forest. Feature importance values were evaluated to determine candidate associations with each outcome. We investigated the relationship of the features to each event via a literature review, comparison with the Side Effect Resource, and the Local Interpretable Model-Agnostic Explanations (LIME) method. Results: The trained model predicted each health outcome of interest with high accuracy (approximately 70%). We found literature supporting our results, and most of the important drug-related features were listed in the Side Effect Resource database as inducing the health outcome of interest. For anaphylaxis, influenza vaccination ranked high in our feature importance analysis and had a positive association in the LIME analysis. Although the feature importance of vaccination was lower for agranulocytosis, it also had a positive relationship in the LIME analysis. Conclusion: We developed a machine learning-based active surveillance system for detecting possible factors that can induce adverse events using health claims and vaccination databases. The results demonstrate a potentially useful application of two linked national health record databases. Our model can contribute to the establishment of a system for conducting active surveillance on vaccination.
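The surveillance approach above ranks candidate drug-outcome associations through random forest feature importances. A minimal sketch of that step; the feature names and synthetic exposure flags are placeholders, and the study additionally used LIME to check the direction of each association:

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for binary drug-exposure / vaccination flags per observation window
feature_names = [f"drug_{i}" for i in range(19)] + ["flu_vaccination"]
X, y = make_classification(n_samples=5000, n_features=20, n_informative=6,
                           weights=[0.9, 0.1], random_state=42)
X = (X > 0).astype(int)                          # binarize to mimic exposure indicators

rf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X, y)

# Rank features by impurity-based importance to surface candidate associations
importances = pd.Series(rf.feature_importances_, index=feature_names)
print(importances.sort_values(ascending=False).head(10))
```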

7.
Healthcare Informatics Research ; : 182-188, 2021.
Article in English | WPRIM | ID: wpr-898530

ABSTRACT

Objectives: Drug-induced QT prolongation can lead to life-threatening arrhythmia. In the intensive care unit (ICU), various drugs are administered concurrently, which can increase the risk of QT prolongation. However, no well-validated method to evaluate the risk of QT prolongation in real-world clinical practice has been established. We developed a risk scoring model to continuously evaluate the quantitative risk of QT prolongation in real-world clinical practice in the ICU. Methods: Continuous electrocardiogram (ECG) signals measured by patient monitoring devices and electronic medical record data were collected for ICU patients. QT and RR intervals were measured from the raw ECG data, and the corrected QT interval (QTc) was calculated by Bazett's formula. A case-crossover study design was adopted. A case was defined as an occurrence of QT prolongation ≥12 hours after any previous QT prolongation, and patients served as their own controls. Conditional logistic regression was conducted on prescription, surgical history, and laboratory test data. Based on the regression analysis, a QTc prolongation risk scoring model was established. Results: In total, 811 ICU patients who experienced QT prolongation were included in this study. Prescription information for 13 drugs was included in the risk scoring model. In the validation dataset, the high-risk group showed a higher rate of QT prolongation than the low- and low-moderate-risk groups. Conclusions: Our proposed model may facilitate risk stratification for QT prolongation during ICU care as well as the selection of appropriate drugs to prevent QT prolongation.
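The QTc values used for case definition follow from the measured QT and RR intervals via Bazett's formula, QTc = QT / sqrt(RR), with RR expressed in seconds. A minimal sketch of that calculation; the 500 ms threshold printed here is a common convention for marked prolongation, not necessarily the cut-off used in the study:

```python
import numpy as np

def qtc_bazett(qt_ms: float, rr_ms: float) -> float:
    """Corrected QT interval (ms) by Bazett's formula: QT / sqrt(RR in seconds)."""
    return qt_ms / np.sqrt(rr_ms / 1000.0)

qt_ms, rr_ms = 420.0, 750.0                 # example beat: QT 420 ms, RR 750 ms (HR 80 bpm)
qtc = qtc_bazett(qt_ms, rr_ms)
print(f"QTc = {qtc:.0f} ms, prolonged: {qtc >= 500}")   # 500 ms used as an illustrative threshold
```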

10.
Healthcare Informatics Research ; : 19-28, 2021.
Article in English | WPRIM | ID: wpr-874606

ABSTRACT

Objectives: Many deep learning-based predictive models evaluate the waveforms of electrocardiograms (ECGs). Because deep learning-based models are data-driven, large, labeled biosignal datasets are required, and most individual researchers find it difficult to collect adequate training data. We suggest that transfer learning can be used to solve this problem and increase the effectiveness of biosignal analysis. Methods: We applied the weights of a pretrained model to another model performing a different task (i.e., transfer learning). We used 2,648,100 unlabeled 8.2-second samples of ECG lead II data to pretrain a convolutional autoencoder (CAE) and employed the CAE to classify 12 ECG rhythms within a dataset of 10,646 10-second 12-lead ECGs with 11 rhythm labels. We split the datasets into training and test sets in an 8:2 ratio. To confirm that transfer learning was effective, we evaluated the performance of the classifier after the proposed transfer learning, random initialization, and two-dimensional transfer learning as the size of the training dataset was reduced. All experiments were repeated 10 times using bootstrapping. CAE performance was evaluated by calculating mean squared errors (MSEs), and that of the ECG rhythm classifier by deriving F1-scores. Results: The MSE of the CAE was 626.583. The mean F1-scores of the classifiers after bootstrapping with 100%, 50%, and 25% of the training dataset were 0.857, 0.843, and 0.835, respectively, when the proposed transfer learning was applied, and 0.843, 0.831, and 0.543, respectively, after random initialization. Conclusions: Transfer learning effectively overcomes the data shortages that can compromise ECG domain analysis by deep learning.
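The transfer step described above amounts to pretraining a convolutional autoencoder on unlabeled ECG segments and reusing its encoder weights to initialize the rhythm classifier. A minimal Keras sketch of that idea; the layer sizes, segment length, single-channel input, and class count are illustrative and not the paper's architecture:

```python
import numpy as np
from tensorflow.keras import layers, Model

SAMPLES, N_CLASSES = 2048, 11            # illustrative segment length and rhythm-label count

# Encoder shared between the autoencoder and the downstream classifier
inp = layers.Input(shape=(SAMPLES, 1))
x = layers.Conv1D(16, 7, padding="same", activation="relu")(inp)
x = layers.MaxPooling1D(4)(x)
x = layers.Conv1D(32, 7, padding="same", activation="relu")(x)
enc = layers.MaxPooling1D(4)(x)
encoder = Model(inp, enc, name="encoder")

# Convolutional autoencoder: encoder + mirrored decoder, trained on unlabeled ECG
x = layers.Conv1D(32, 7, padding="same", activation="relu")(encoder.output)
x = layers.UpSampling1D(4)(x)
x = layers.Conv1D(16, 7, padding="same", activation="relu")(x)
x = layers.UpSampling1D(4)(x)
recon = layers.Conv1D(1, 7, padding="same")(x)
cae = Model(encoder.input, recon, name="cae")
cae.compile(optimizer="adam", loss="mse")

unlabeled = np.random.randn(256, SAMPLES, 1).astype("float32")   # stand-in for unlabeled ECG II
cae.fit(unlabeled, unlabeled, epochs=1, batch_size=32, verbose=0)

# Transfer: reuse the pretrained encoder as the classifier's feature extractor
head = layers.GlobalAveragePooling1D()(encoder.output)
out = layers.Dense(N_CLASSES, activation="softmax")(head)
classifier = Model(encoder.input, out, name="rhythm_classifier")
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

labeled_x = np.random.randn(64, SAMPLES, 1).astype("float32")    # stand-in for labeled ECG segments
labeled_y = np.random.randint(0, N_CLASSES, size=64)
classifier.fit(labeled_x, labeled_y, epochs=1, batch_size=16, verbose=0)
```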

12.
Journal of Sleep Medicine ; : 84-92, 2020.
Article | WPRIM | ID: wpr-836299

ABSTRACT

Objectives: Cheyne-Stokes respiration (CSR) is frequently found in critically ill patients and is associated with poor prognosis. However, CSR has not been evaluated in neurocritical patients. This study investigated the frequency and prognostic impact of CSR in neurocritical patients using biosignal big data obtained from intensive care units. Methods: This study included all patients who received neurocritical care at a tertiary hospital from January 2018 to December 2019. Clinical information and intensive care unit biosignal data were analyzed. The respiratory curve was visually assessed to determine whether CSR and obstructive sleep apnea (OSA) were present, and heart rate variability (HRV) was obtained from the electrocardiogram. Results: CSR was confirmed in 166 of 406 patients (40.9%). Patients with CSR were older, had a higher frequency of cardiovascular risk factors as well as heart failure, and had poorer outcomes (modified Rankin Scale ≥4). In multiple regression analysis adjusted for other variables, CSR was significantly associated with poor outcome (odds ratio, 2.27; 95% confidence interval, 1.25-4.14; p=0.007). HRV analysis demonstrated that CSR and OSA had distinct autonomic characteristics. Conclusions: This study is the first to reveal the substantial frequency of CSR in neurocritical patients and suggests that CSR can be used as a predictor of poor prognosis in neurocritical care.
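The adjusted odds ratio above comes from a multivariable logistic regression with poor outcome as the dependent variable. A minimal statsmodels sketch of that analysis; the covariates and the synthetic data are placeholders for the clinical variables the study actually adjusted for:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 406
df = pd.DataFrame({
    "csr": rng.integers(0, 2, n),             # Cheyne-Stokes respiration present (placeholder)
    "age": rng.normal(65, 12, n),
    "heart_failure": rng.integers(0, 2, n),
})
# Synthetic outcome loosely driven by the covariates (illustration only)
logit_p = -3 + 0.8 * df.csr + 0.03 * df.age + 0.7 * df.heart_failure
df["poor_outcome"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["csr", "age", "heart_failure"]])
fit = sm.Logit(df["poor_outcome"], X).fit(disp=False)

# Exponentiate coefficients to report adjusted odds ratios with 95% CIs
or_table = pd.DataFrame({"OR": np.exp(fit.params),
                         "CI_low": np.exp(fit.conf_int()[0]),
                         "CI_high": np.exp(fit.conf_int()[1])})
print(or_table.loc[["csr"]])
```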

13.
Korean Journal of Anesthesiology ; : 275-284, 2020.
Article | WPRIM | ID: wpr-833990

ABSTRACT

Biosignals such as the electrocardiogram or photoplethysmogram are widely used for determining and monitoring the medical condition of patients. It has recently been shown that more information can be extracted from biosignals by applying artificial intelligence (AI). At present, one of the most impactful advancements in AI is deep learning. Deep learning-based models can extract important features from raw data without human feature engineering, provided the amount of data is sufficient. This capability presents opportunities to obtain latent information that may be used as a digital biomarker for detecting or predicting a clinical outcome or event without further invasive evaluation. However, the black-box nature of deep learning models is difficult to understand for clinicians familiar with conventional methods of biosignal analysis. A basic knowledge of AI and machine learning is required for clinicians to properly interpret the extracted information and to adopt it in clinical practice. This review covers the basics of AI and machine learning and the feasibility of their application by clinicians to real-life situations in the near future.

14.
Allergy, Asthma & Immunology Research ; : 343-356, 2019.
Article in English | WPRIM | ID: wpr-739412

ABSTRACT

PURPOSE: In asthmatic patients, treatment with corticosteroids, in addition to conventional risk factors for osteoporosis, may lead to bone loss. Trabecular bone score (TBS) is an indirect new parameter of bone quality. This study aimed to evaluate TBS in asthmatics in comparison to propensity score-matched controls and to investigate correlations between TBS and cumulative systemic and inhaled corticosteroid doses 1 year prior to bone mineral density (BMD) measurement in patients with asthma. METHODS: In total, 627 patients with asthma and the same number of non-asthmatic controls matched for sex and age were included in this retrospective cohort study. TBS was calculated in the lumbar region, based on 2 dimensional projections of dual-energy X-ray absorptiometry. RESULTS: Patients with severe asthma exhibited lower vertebral TBS values (1.32 ± 0.1) than those with non-severe asthma (1.36 ± 0.1, P = 0.001), with non-active asthma (1.38 ± 0.1, P < 0.001), and without asthma (1.39 ± 0.1, P < 0.001). No significant differences in BMD were noted among the study groups. TBS was significantly correlated with cumulative systemic and inhaled corticosteroid doses as well as asthma duration, lung function and airway hyper-responsiveness. A generalized linear model revealed that age, severe asthma, and frequency of oral corticosteroid burst were significant predictors for TBS levels. CONCLUSIONS: TBS can be used as an early indicator of altered bone quality stemming from glucocorticoid therapy or, possibly, more severe asthma.


Subject(s)
Humans , Absorptiometry, Photon , Adrenal Cortex Hormones , Asthma , Bone Density , Cohort Studies , Linear Models , Lumbosacral Region , Lung , Osteoporosis , Respiratory Hypersensitivity , Retrospective Studies , Risk Factors
15.
Healthcare Informatics Research ; : 201-211, 2019.
Article in English | WPRIM | ID: wpr-763937

ABSTRACT

OBJECTIVES: Biosignal data captured by patient monitoring systems could provide key evidence for detecting or predicting critical clinical events; however, noise in these data hinders their use. Because deep learning algorithms can extract features without human annotation, this study hypothesized that they could be used to screen unacceptable electrocardiograms (ECGs) that include noise. To test that, a deep learning-based model for unacceptable ECG screening was developed, and its screening results were compared with the interpretations of a medical expert. METHODS: To develop and apply the screening model, we used a biosignal database comprising 165,142,920 ECG II (10-second lead II electrocardiogram) data gathered between August 31, 2016 and September 30, 2018 from a trauma intensive-care unit. Then, 2,700 and 300 ECGs (ratio of 9:1) were reviewed by a medical expert and used for 9-fold cross-validation (training and validation) and test datasets. A convolutional neural network-based model for unacceptable ECG screening was developed based on the training and validation datasets. The model exhibiting the lowest cross-validation loss was subsequently selected as the final model. Its performance was evaluated through comparison with a test dataset. RESULTS: When the screening results of the proposed model were compared to the test dataset, the area under the receiver operating characteristic curve and the F1-score of the model were 0.93 and 0.80 (sensitivity = 0.88, specificity = 0.89, positive predictive value = 0.74, and negative predictive value = 0.96). CONCLUSIONS: The deep learning-based model developed in this study is capable of detecting and screening unacceptable ECGs efficiently.
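The screening metrics reported above (sensitivity, specificity, positive and negative predictive values) all follow from the 2x2 confusion matrix of model decisions against the expert labels. A minimal sketch; the label vectors are invented, and thresholding of the network's output probability at 0.5 is an assumption, not the paper's operating point:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 300)                      # expert label: 1 = unacceptable ECG
y_prob = 0.6 * rng.random(300) + 0.3 * y_true         # stand-in model output probabilities
y_pred = (y_prob >= 0.5).astype(int)                  # illustrative decision threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)          # positive predictive value
npv = tn / (tn + fn)          # negative predictive value
print(f"sens={sensitivity:.2f} spec={specificity:.2f} ppv={ppv:.2f} npv={npv:.2f}")
```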


Subject(s)
Humans , Dataset , Electrocardiography , Learning , Mass Screening , Monitoring, Physiologic , Noise , ROC Curve , Sensitivity and Specificity , Signal Detection, Psychological
16.
The Korean Journal of Internal Medicine ; : 90-98, 2019.
Article in English | WPRIM | ID: wpr-719281

ABSTRACT

BACKGROUND/AIMS: Olmesartan, a widely used angiotensin II receptor blocker (ARB), has been linked to sprue-like enteropathy. No cases of olmesartan-associated enteropathy have been reported in Northeast Asia. We investigated the associations between olmesartan and other ARBs and the incidence of enteropathy in Korea. METHODS: Our retrospective cohort study used data from the Korean National Health Insurance Service to identify 108,559 patients (58,186 females) who were initiated on angiotensin converting enzyme inhibitors (ACEis), olmesartan, or other ARBs between January 2005 and December 2012. The incidences of enteropathy were compared among drug groups. Changes in body weight were compared after propensity score matching of patients in the ACEis and olmesartan groups. RESULTS: Among the 108,559 patients, 31 were diagnosed with enteropathy. The incidences were 0.73, 0.24, and 0.37 per 1,000 persons in the ACEis, olmesartan, and other ARBs groups, respectively. Adjusted rate ratios for enteropathy were 0.33 for olmesartan (95% confidence interval [CI], 0.10 to 1.09; p = 0.070) and 0.34 for other ARBs (95% CI, 0.14 to 0.83; p = 0.017) compared with the ACEis group, after adjustment for age, sex, income level, and various comorbidities. A post hoc analysis with matched cohorts revealed that the proportion of patients with significant weight loss did not differ between the ACEis and olmesartan groups. CONCLUSIONS: Olmesartan was not associated with intestinal malabsorption or significant body weight loss in the general Korean population. Additional large-scale prospective studies of the relationship between olmesartan and the incidence of enteropathy in Asian populations are needed.


Subject(s)
Humans , Angiotensin Receptor Antagonists , Angiotensin-Converting Enzyme Inhibitors , Asia , Asian People , Body Weight , Cohort Studies , Comorbidity , Drug-Related Side Effects and Adverse Reactions , Incidence , Insurance Claim Review , Intestinal Diseases , Korea , National Health Programs , Propensity Score , Prospective Studies , Receptors, Angiotensin , Retrospective Studies , Weight Loss
17.
Healthcare Informatics Research ; : 242-246, 2018.
Article in English | WPRIM | ID: wpr-716031

ABSTRACT

OBJECTIVES: Electrocardiogram (ECG) data are important for the study of cardiovascular disease and adverse drug reactions. Although the development of analytical techniques such as machine learning has improved our ability to extract useful information from ECGs, there is a lack of easily available ECG data for research purposes. We previously published an article on a database of ECG parameters and related clinical data (ECG-ViEW), which we have now updated with additional 12-lead waveform information. METHODS: All ECGs stored in portable document format (PDF) were collected from a tertiary teaching hospital in Korea over a 23-year study period. We developed software that extracts all ECG parameters and waveform information from the ECG reports in PDF format and stores them in a database (metadata) and text files (raw waveforms). RESULTS: Our database includes all parameters (ventricular rate, PR interval, QRS duration, QT/QTc interval, P-R-T axes, and interpretations) and 12-lead waveforms (for leads I, II, III, aVR, aVL, aVF, V1, V2, V3, V4, V5, and V6) from 1,039,550 ECGs (from 447,445 patients). Demographics, drug exposure data, diagnosis history, and laboratory test results (serum calcium, magnesium, and potassium levels) were also extracted from electronic medical records and linked to the ECG information. CONCLUSIONS: ECG information that includes 12-lead waveforms was extracted and transformed into a form that can be analyzed. The description and programming code in this case report can serve as a reference for other researchers building ECG databases from their own local ECG repositories.
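A minimal sketch of the extraction step described above: pulling the text layer out of an ECG report PDF and parsing parameter fields with regular expressions. The field labels and report layout are hypothetical (vendor reports differ), and pypdf is only one possible text extractor, not necessarily the tool the authors used:

```python
import re
from pypdf import PdfReader

def parse_ecg_report(pdf_path: str) -> dict:
    """Extract ECG parameters from a single-page report PDF (hypothetical field layout)."""
    text = PdfReader(pdf_path).pages[0].extract_text()
    patterns = {
        "ventricular_rate_bpm": r"Vent\.?\s*rate\s*[:=]?\s*(\d+)",
        "pr_interval_ms":       r"PR\s*interval\s*[:=]?\s*(\d+)",
        "qrs_duration_ms":      r"QRS\s*duration\s*[:=]?\s*(\d+)",
        "qt_qtc_ms":            r"QT/QTc\s*[:=]?\s*(\d+)\s*/\s*(\d+)",
    }
    out = {}
    for key, pat in patterns.items():
        m = re.search(pat, text, flags=re.IGNORECASE)
        out[key] = m.groups() if m else None     # None when the field is absent or renamed
    return out

# print(parse_ecg_report("ecg_report.pdf"))      # path is a placeholder
```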


Subject(s)
Calcium , Cardiovascular Diseases , Demography , Diagnosis , Drug-Related Side Effects and Adverse Reactions , Electrocardiography , Electronic Health Records , Hospitals, Teaching , Korea , Machine Learning , Magnesium , Potassium
18.
Healthcare Informatics Research ; : 333-337, 2017.
Article in English | WPRIM | ID: wpr-195854

ABSTRACT

OBJECTIVES: Biosignal data include important physiological information. For that reason, many devices and systems have been developed, but not enough consideration has been given to how raw data from multiple systems should be collected and integrated. To overcome this limitation, we developed a system for collecting and integrating biosignal data from two patient monitoring systems. METHODS: We developed an interface to extract biosignal data from Nihon Kohden and Philips monitoring systems. The Nihon Kohden system has a central server for the temporary storage of raw waveform data, which can be requested using the HL7 protocol. However, the Philips system used in our hospital cannot save raw waveform data; therefore, our system was connected to the monitoring devices using the RS232 protocol. After collection, the data were transformed into and stored in a unified format. RESULTS: From September 2016 to August 2017, we collected approximately 117 patient-years of waveform data from 1,268 patients in 79 beds of five intensive care units. Because data from the two systems are stored in the same format, application software can be run on them without compatibility issues. CONCLUSIONS: Our system collects biosignal data from different systems in a unified format. The data collected by the system can be used to develop algorithms or applications without needing to consider the source of the data.
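For the RS232 path described above, the collection step reduces to reading frames from a serial port and appending them to a unified store. A heavily simplified pyserial sketch; the port name, baud rate, and newline-delimited framing are assumptions for illustration only, and real patient-monitor protocols are binary and considerably more involved:

```python
import json
import time
import serial   # pyserial

PORT, BAUD = "/dev/ttyUSB0", 115200      # placeholders for the actual device connection

def collect(seconds: float, out_path: str = "waveform_log.jsonl") -> None:
    """Read newline-delimited frames for a fixed duration and store them with timestamps."""
    with serial.Serial(PORT, BAUD, timeout=1) as ser, open(out_path, "a") as out:
        t_end = time.time() + seconds
        while time.time() < t_end:
            frame = ser.readline()                   # one frame per line (assumed framing)
            if frame:
                record = {"t": time.time(),
                          "raw": frame.decode(errors="replace").strip()}
                out.write(json.dumps(record) + "\n")  # unified, device-agnostic record format

# collect(10)   # would read 10 seconds of data from the configured port
```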


Subject(s)
Humans , Electrocardiography , Information Storage and Retrieval , Intensive Care Units , Monitoring, Physiologic , Photoplethysmography
19.
Healthcare Informatics Research ; : 75-76, 2017.
Article in English | WPRIM | ID: wpr-51906

ABSTRACT

No abstract available.

20.
The Korean Journal of Critical Care Medicine ; : 221-228, 2016.
Article in English | WPRIM | ID: wpr-770949

ABSTRACT

BACKGROUND: Injury severity scoring systems that quantify and predict trauma outcomes have not been established in Korea. This study was designed to determine the best system for use in the Korean trauma population. METHODS: We collected and analyzed data from trauma patients admitted to our institution from January 2010 to December 2014. The Injury Severity Score (ISS), Revised Trauma Score (RTS), and Trauma and Injury Severity Score (TRISS) were calculated based on the data from the enrolled patients. The area under the receiver operating characteristic (ROC) curve (AUC) for the predictive ability of each scoring system was obtained, and a pairwise comparison of ROC curves was performed. Additionally, cut-off values were estimated to predict mortality, and the corresponding accuracy, positive predictive value, and negative predictive value were obtained. RESULTS: A total of 7,120 trauma patients (6,668 blunt and 452 penetrating injuries) were enrolled in this study. The AUCs of the ISS, RTS, and TRISS were 0.866, 0.894, and 0.942, respectively, and the predictive ability of the TRISS was significantly better than that of the other two scores (p < 0.001 for both comparisons). The cut-off value of the TRISS was 0.9082, with a sensitivity of 81.9% and a specificity of 92.0%; mortality was predicted with an accuracy of 91.2%, and its positive predictive value was the highest at 46.8%. CONCLUSIONS: The results of our study, based on data from one institution, suggest that the TRISS is the best prediction model of trauma outcomes in the current Korean population. Further study is needed with more data from multiple centers in Korea.
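TRISS combines the RTS, ISS, and an age index in a logistic model of the probability of survival. A minimal sketch of that calculation; the coefficients below are the commonly cited Major Trauma Outcome Study values for blunt and penetrating injury and may differ from the revision used in this study, so treat them as illustrative:

```python
import math

# Commonly cited MTOS coefficients (b0, RTS, ISS, age index); revisions with other values exist
COEF = {
    "blunt":       (-0.4499, 0.8085, -0.0835, -1.7430),
    "penetrating": (-2.5355, 0.9934, -0.0651, -1.1360),
}

def triss_survival(rts: float, iss: int, age_years: int, mechanism: str = "blunt") -> float:
    """TRISS probability of survival = logistic(b0 + b1*RTS + b2*ISS + b3*AgeIndex)."""
    b0, b1, b2, b3 = COEF[mechanism]
    age_index = 1 if age_years >= 55 else 0
    b = b0 + b1 * rts + b2 * iss + b3 * age_index
    return 1.0 / (1.0 + math.exp(-b))

# Example: blunt trauma, RTS 6.9, ISS 20, age 45 (illustrative values)
print(f"{triss_survival(6.9, 20, 45):.3f}")
```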


Subject(s)
Humans , Area Under Curve , Injury Severity Score , Korea , Mortality , ROC Curve , Sensitivity and Specificity , Trauma Centers